The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered the participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
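Two of the practices the respondents mention, K-fold cross-validation on the training set and ensembling of multiple identical models, combine naturally. Below is a minimal sketch of that recipe, assuming a generic scikit-learn-style estimator; the `build_model` factory, the data arrays, and the fold count are placeholders, not any particular team's pipeline.

```python
# Minimal sketch of k-fold cross-validation combined with ensembling of
# identical models, as described in the survey results above.
import numpy as np
from sklearn.model_selection import KFold

def kfold_ensemble_predict(build_model, X, y, X_test, k=5, seed=0):
    """Train one model per fold and average their predictions on X_test."""
    folds = KFold(n_splits=k, shuffle=True, random_state=seed)
    test_preds = []
    for train_idx, val_idx in folds.split(X):
        model = build_model()                      # fresh model for each fold
        model.fit(X[train_idx], y[train_idx])      # fit on the training split
        print("fold validation score:", model.score(X[val_idx], y[val_idx]))
        test_preds.append(model.predict(X_test))   # keep per-fold predictions
    # Ensemble of identical models: average the k per-fold predictions.
    return np.mean(test_preds, axis=0)
```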
Ultrasound is progressing toward becoming an affordable and versatile solution for medical imaging. With the advent of the COVID-19 global pandemic, there is a need to fully automate ultrasound imaging, as it requires trained operators to remain in close proximity to patients for long periods of time. In this work, we investigate the important yet seldom-studied problem of scan target localization in the setting of lung ultrasound imaging. We propose a purely vision-based, data-driven method that incorporates learning-based computer vision techniques. We combine a human pose estimation model with a specially designed regression model to predict the lung ultrasound scan targets, and deploy multiview stereo vision to enhance the consistency of 3D target localization. While related works mostly focus on phantom experiments, we collect data from 30 human subjects for testing. Our method attains an accuracy of 15.52 (9.47) mm for probe positioning and 4.32 (3.69)° for probe orientation, with a success rate above 80% under an error threshold of 25 mm for all scan targets. Moreover, our approach can serve as a general solution for other types of ultrasound modalities. The implementation code has been released.
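The pipeline outlined above, pose keypoints feeding a regression model and multiview geometry for 3D consistency, can be illustrated roughly as follows. This is a hedged sketch under assumed names and shapes (a 17-joint keypoint layout, a two-view DLT triangulation), not the authors' actual design.

```python
# Illustrative sketch: 2-D body keypoints from an off-the-shelf pose estimator
# feed a small regression head that predicts the scan-target pixel in each
# view; per-view predictions are then triangulated into one 3-D target.
import numpy as np
import torch
import torch.nn as nn

class ScanTargetRegressor(nn.Module):
    """Maps flattened 2-D keypoints (J joints) to a 2-D scan-target pixel."""
    def __init__(self, num_joints=17):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_joints * 2, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 2),              # (u, v) target location in pixels
        )

    def forward(self, keypoints):          # keypoints: (B, J, 2)
        return self.net(keypoints.flatten(1))

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point seen in two calibrated views.
    P1, P2 are 3x4 projection matrices; uv1, uv2 are pixel coordinates."""
    A = np.stack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                    # homogeneous -> Euclidean 3-D point
```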
CNN-based surrogates have become prevalent in scientific applications as replacements for conventional, time-consuming physical approaches. Although these surrogates can yield satisfactory results at significantly lower computational cost on small training datasets, our benchmarking results show that data-loading overhead becomes the major performance bottleneck when training surrogates with large datasets. In practice, surrogates are usually trained with high-resolution scientific data, which can easily reach the terabyte scale. Several state-of-the-art data loaders have been proposed to improve loading throughput in general CNN training; however, they are sub-optimal when applied to surrogate training. In this work, we propose SOLAR, a surrogate data loader that can ultimately increase loading throughput during training. It builds on three key observations from our benchmarking and contains three novel designs. Specifically, SOLAR first generates a pre-determined shuffled index list and accordingly optimizes the global access order and the buffer eviction scheme to maximize data reuse and the buffer hit rate. It then exploits a tradeoff between lightweight computational imbalance and heavyweight loading-workload imbalance to speed up overall training. Finally, it optimizes its data access pattern with HDF5 to achieve better parallel I/O throughput. Our evaluation with three scientific surrogates and 32 GPUs shows that SOLAR achieves up to a 24.4X speedup over the PyTorch DataLoader and a 3.52X speedup over state-of-the-art data loaders.
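The core idea attributed to SOLAR above, fixing the shuffled sample order in advance so the loader can plan its buffer, can be sketched as follows. The look-ahead eviction policy, buffer size, and single-process structure are illustrative assumptions, not the paper's actual design.

```python
# Hedged sketch: pre-determined per-epoch shuffles let the loader look ahead
# and evict the buffered sample whose next use is farthest in the future,
# maximizing reuse of samples already in memory.
import numpy as np

class PreShuffledLoader:
    def __init__(self, num_samples, num_epochs, buffer_size, read_fn, seed=0):
        rng = np.random.default_rng(seed)
        # Pre-determined shuffled index list for the whole training run.
        self.schedule = [rng.permutation(num_samples) for _ in range(num_epochs)]
        self.buffer_size = buffer_size
        self.read_fn = read_fn             # e.g. reads one sample from an HDF5 file
        self.buffer = {}                   # sample index -> cached sample

    def epoch(self, e):
        order = self.schedule[e]
        for pos, idx in enumerate(order):
            if idx not in self.buffer:     # buffer miss: hit the file system
                sample = self.read_fn(idx)
                if len(self.buffer) >= self.buffer_size:
                    future = order[pos + 1:]
                    def next_use(i):       # distance to the next access of i
                        hits = np.where(future == i)[0]
                        return hits[0] if hits.size else np.inf
                    # Evict the cached sample needed furthest in the future.
                    self.buffer.pop(max(self.buffer, key=next_use))
                self.buffer[idx] = sample
            yield self.buffer[idx]
```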
Existing detection methods commonly use parameterized bounding boxes (BBoxes) to model and detect (horizontal) objects, with an additional rotation angle parameter for rotated objects. We argue that this mechanism has fundamental limitations for building an effective regression loss for rotated object detection, especially for high-precision detection (e.g., 0.75). Instead, we propose to model rotated objects as Gaussian distributions. A direct advantage is that our new regression loss, based on the distance between two Gaussians, e.g., the Kullback-Leibler divergence (KLD), aligns well with the actual detection performance metric, which existing methods cannot handle well. Moreover, two bottlenecks, namely boundary discontinuity and the square-like problem, also vanish. We further propose an efficient label assignment strategy based on the Gaussian metric to boost performance. Interestingly, by analyzing the gradients of the BBox parameters under the Gaussian-based KLD loss, we show that these parameters are dynamically updated with interpretable physical meaning, which helps explain the effectiveness of our method, especially for high-precision detection. We extend our approach from 2-D to 3-D with a tailored algorithm design to handle heading estimation, and experimental results with various base detectors on twelve public datasets (2-D/3-D, aerial/text/face images) demonstrate its superiority.
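The Gaussian view of a rotated box and the KLD regression loss can be written down directly. The sketch below converts a rotated box (x, y, w, h, θ) into a 2-D Gaussian and evaluates the KL divergence between a predicted and a target Gaussian; the half-width/half-height scaling and the NumPy formulation are illustrative choices, not the paper's exact implementation.

```python
# Hedged sketch: rotated box -> 2-D Gaussian, then KL divergence between the
# predicted and target Gaussians as a regression loss.
import numpy as np

def box_to_gaussian(x, y, w, h, theta):
    """Rotated box -> (mean, covariance) of a 2-D Gaussian."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    S = np.diag([w / 2.0, h / 2.0])
    return np.array([x, y]), R @ S @ S @ R.T   # mu, Sigma = R S^2 R^T

def kld_gaussian(mu_p, sig_p, mu_t, sig_t):
    """KL( N(mu_p, sig_p) || N(mu_t, sig_t) ) for 2-D Gaussians."""
    inv_t = np.linalg.inv(sig_t)
    diff = (mu_t - mu_p).reshape(2, 1)
    term_trace = np.trace(inv_t @ sig_p)
    term_mahal = float(diff.T @ inv_t @ diff)
    term_logdet = np.log(np.linalg.det(sig_t) / np.linalg.det(sig_p))
    return 0.5 * (term_trace + term_mahal + term_logdet - 2.0)
```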
This paper explores a machine learning approach to generating high-resolution point clouds from a single-chip mmWave radar. Unlike lidar and vision-based systems, mmWave radar can operate in harsh environments and see through occlusions such as smoke, fog, and dust. Unfortunately, current mmWave processing techniques offer poor spatial resolution compared to lidar point clouds. This paper introduces RadarHD, an end-to-end neural network that constructs lidar-like point clouds from low-resolution radar input. Enhancing radar images is challenging because of specular and spurious reflections. Radar data also do not map well to traditional image processing techniques because of the signal's sinc-like spreading pattern. We overcome these challenges by training RadarHD on a large volume of raw I/Q radar data paired with lidar point clouds collected in a variety of indoor environments. Our experiments show that RadarHD can generate rich point clouds even in scenes not observed during training and in the presence of dense smoke. Moreover, RadarHD's point clouds are of high enough quality to work with existing lidar odometry and mapping workflows.
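As a rough illustration of the supervision scheme described above, the sketch below pairs a low-resolution radar intensity image with a higher-resolution lidar-style occupancy grid and fits a small encoder-decoder with a binary cross-entropy loss. The architecture, resolutions, and loss are assumptions for illustration only, not RadarHD's actual network.

```python
# Hedged sketch: learn a mapping from a low-resolution radar image to a
# higher-resolution lidar-like occupancy grid from paired training data.
import torch
import torch.nn as nn

class RadarToLidarNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),  # 2x upsampling
        )

    def forward(self, radar):                  # radar: (B, 1, H, W)
        return self.decode(self.encode(radar)) # logits: (B, 1, 2H, 2W)

# One supervised step against a paired lidar occupancy grid (toy data).
model, loss_fn = RadarToLidarNet(), nn.BCEWithLogitsLoss()
radar = torch.rand(2, 1, 64, 64)
lidar_grid = (torch.rand(2, 1, 128, 128) > 0.9).float()
loss = loss_fn(model(radar), lidar_grid)
```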
To track a target in a video, current visual trackers usually perform a greedy search for the target object's location in each frame, that is, the candidate region with the maximum response score is selected as the tracking result of each frame. However, we find that this may not be an optimal choice, especially in challenging tracking scenarios such as heavy occlusion and fast motion. To address this issue, we propose to maintain multiple tracking trajectories and apply a beam search strategy to visual tracking, so that trajectories with fewer accumulated errors can be identified. Accordingly, this paper introduces a novel multi-agent reinforcement learning strategy based on beam search, termed BeamTracking. It is mainly inspired by the image captioning task, which takes an image as input and generates diverse descriptions with a beam search algorithm. We thus formulate tracking as a sample selection problem carried out by multiple parallel decision-making processes, each of which aims to select one sample as the tracking result in each frame. Each maintained trajectory is associated with an agent that makes decisions and determines which actions should be taken to update the related information. Once all frames are processed, we select the trajectory with the maximum accumulated score as the tracking result. Extensive experiments on seven popular tracking benchmark datasets validate the effectiveness of the proposed algorithm.
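The beam-search idea described above can be illustrated without the reinforcement-learning agents: keep the top-K trajectories ranked by accumulated response score instead of greedily committing to one candidate per frame, and pick the best trajectory at the end. The candidate generator and scoring function below are placeholders, so this is a hedged sketch rather than the paper's tracker.

```python
# Hedged sketch of beam search over tracking trajectories.
def beam_search_track(frames, candidates_fn, score_fn, beam_width=3):
    beams = [([], 0.0)]                              # (trajectory, cumulative score)
    for frame in frames:
        expanded = []
        for traj, total in beams:
            prev = traj[-1] if traj else None
            for cand in candidates_fn(frame, prev):  # candidate boxes near prev
                expanded.append((traj + [cand], total + score_fn(frame, cand)))
        # Keep only the top-K trajectories with the highest cumulative score.
        beams = sorted(expanded, key=lambda b: b[1], reverse=True)[:beam_width]
    return max(beams, key=lambda b: b[1])[0]         # best full trajectory
```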
Recent studies have shown that deep learning models such as RNNs and Transformers bring significant performance gains to long-term time series forecasting because they effectively exploit historical information. However, we find that there is still considerable room for improvement in how historical information is preserved in neural networks while avoiding overfitting to noise in the history. Addressing this issue allows the capabilities of deep learning models to be better exploited. To this end, we design a Frequency improved Legendre Memory model, or FiLM: it applies Legendre polynomial projections to approximate historical information, uses Fourier projection to remove noise, and adds a low-rank approximation to speed up computation. Our empirical studies show that the proposed FiLM significantly improves the accuracy of state-of-the-art models in multivariate and univariate long-term forecasting by (20.3%, 22.6%), respectively. We also demonstrate that the representation module developed in this work can be used as a general plug-in to improve the long-term forecasting performance of other deep learning modules. Code is available at https://github.com/tianzhou2011/film/.
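The frequency-domain denoising step attributed to FiLM above amounts to projecting the history onto its lowest Fourier modes and discarding the rest, i.e., a low-pass filter. The sketch below shows that step only; the number of retained modes is an illustrative choice, and the Legendre memory and low-rank pieces are not shown.

```python
# Hedged sketch: keep only the lowest Fourier modes of a series (low-pass).
import torch

def fourier_lowpass(x, keep_modes=8):
    """x: (batch, length) real-valued series -> same shape, low-pass filtered."""
    spec = torch.fft.rfft(x, dim=-1)       # one-sided spectrum
    mask = torch.zeros_like(spec)
    mask[..., :keep_modes] = 1.0           # keep only the lowest frequencies
    return torch.fft.irfft(spec * mask, n=x.shape[-1], dim=-1)

history = torch.randn(4, 96)               # toy batch of length-96 series
smoothed = fourier_lowpass(history)        # noise-reduced view of the history
```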
Compared with conventional statistical parametric methods, deep learning-based singing voice synthesis (SVS) systems have been shown to flexibly generate singing of better quality. However, neural systems are typically data-hungry, and it is difficult to reach a reasonable singing quality with the limited publicly available training data. In this work, we explore different data augmentation methods to facilitate the training of SVS systems, including several strategies customized for SVS based on pitch augmentation and mix-up augmentation. To further stabilize training, we introduce a cycle-consistent training strategy. Extensive experiments on two public singing databases demonstrate that our proposed augmentation methods and stabilized training strategy can significantly improve performance in both objective and subjective evaluations.
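The two augmentation families named above can be illustrated on toy data: a random pitch shift of the raw waveform and a mix-up of two acoustic feature sequences. Parameter choices (semitone range, Beta(0.2, 0.2)) are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: pitch augmentation on waveforms and mix-up on feature sequences.
import numpy as np
import librosa

def pitch_augment(wave, sr, max_semitones=2, rng=None):
    """Randomly shift the pitch of a waveform by up to +/- max_semitones."""
    if rng is None:
        rng = np.random.default_rng()
    steps = rng.uniform(-max_semitones, max_semitones)
    return librosa.effects.pitch_shift(wave, sr=sr, n_steps=steps)

def mixup(feat_a, feat_b, alpha=0.2, rng=None):
    """Convex combination of two same-shaped feature sequences (mix-up)."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * feat_a + (1.0 - lam) * feat_b, lam
```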
Aspect-based sentiment analysis (ABSA) is a fine-grained sentiment analysis task. To better understand long and complicated sentences and obtain accurate aspect-specific information, this task usually requires linguistic and commonsense knowledge. However, most methods employ complicated and inefficient approaches to incorporate external knowledge, e.g., directly searching graph nodes. In addition, the complementarity between external knowledge and linguistic information has not been thoroughly studied. To this end, we propose a Knowledge Graph Augmented Network (KGAN), which aims to effectively incorporate external knowledge alongside explicit syntactic and contextual information. In particular, KGAN captures sentiment representations from multiple different perspectives, i.e., context-, syntax-, and knowledge-based. First, KGAN learns the contextual and syntactic representations in parallel to fully extract the semantic features. Then, KGAN integrates the knowledge graph into an embedding space, based on which the aspect-specific knowledge representations are further obtained via an attention mechanism. Finally, we propose a hierarchical fusion module to complement these multi-view representations in a local-to-global manner. Extensive experiments on three popular ABSA benchmarks demonstrate the effectiveness and robustness of our KGAN. Notably, with the help of the pretrained RoBERTa model, KGAN achieves a new record of state-of-the-art performance.
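The attention step described above, an aspect representation attending over knowledge-graph entity embeddings to produce an aspect-specific knowledge vector, can be sketched as a single-head dot-product attention module. Dimensions and the single-head formulation are illustrative assumptions, not KGAN's exact module.

```python
# Hedged sketch: aspect vector attends over KG entity embeddings.
import math
import torch
import torch.nn as nn

class KnowledgeAttention(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.dim = dim

    def forward(self, aspect_vec, entity_embs):
        # aspect_vec: (B, dim); entity_embs: (B, N, dim) KG neighbors of the aspect
        q = self.query(aspect_vec).unsqueeze(1)            # (B, 1, dim)
        k = self.key(entity_embs)                          # (B, N, dim)
        scores = (q @ k.transpose(1, 2)) / math.sqrt(self.dim)
        weights = torch.softmax(scores, dim=-1)            # (B, 1, N)
        return (weights @ entity_embs).squeeze(1)          # (B, dim) knowledge vector
```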
In recent years, solvers for the linear assignment problem (LAP) have attracted much research attention, and they are usually embedded into learning frameworks as components. However, previous algorithms, with or without learning strategies, usually suffer from degraded optimality as the problem size grows. In this paper, we propose a learnable linear assignment solver based on deep graph networks. Specifically, we first convert the cost matrix into a bipartite graph and cast the assignment task as the problem of selecting reliable edges from the constructed graph. Subsequently, a deep graph network is developed to aggregate and update the features of nodes and edges. Finally, the network predicts a label for each edge that indicates the assignment relationship. Experimental results on synthetic datasets show that our method outperforms state-of-the-art baselines and achieves consistently high accuracy as the problem size grows. Furthermore, we embed the proposed solver, in comparison with state-of-the-art baseline solvers, into a popular multi-object tracking (MOT) framework to train the tracker in an end-to-end manner. Experimental results on MOT benchmarks show that the proposed LAP solver improves the tracker by a large margin.
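The formulation described above treats each entry of the cost matrix as an edge of a bipartite graph between "row" and "column" nodes, and learns to classify which edges belong to the assignment. In the sketch below the graph network is reduced to a per-edge MLP for brevity, so this is an illustrative assumption, not the paper's architecture.

```python
# Hedged sketch: cost matrix -> bipartite edges, scored by a tiny edge classifier.
import torch
import torch.nn as nn

def cost_matrix_to_edges(cost):
    """cost: (n, m) -> edge list [(i, j)] and per-edge features (n*m, 1)."""
    n, m = cost.shape
    edges = [(i, j) for i in range(n) for j in range(m)]
    feats = cost.reshape(-1, 1)            # here: just the cost of each edge
    return edges, feats

edge_scorer = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

cost = torch.rand(4, 4)
edges, feats = cost_matrix_to_edges(cost)
logits = edge_scorer(feats).reshape(cost.shape)     # score per (row, col) edge
assignment = logits.argmax(dim=1)                   # column picked for each row
```

A per-row argmax does not enforce a one-to-one matching; at inference a discrete assignment would typically still be recovered from the edge scores, e.g. with the Hungarian algorithm.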